Activation functions, computational goals, and learning rules for local processors with contextual guidance
Information about context can enable local processors to discover latent variables that are relevant to the context within which they occur, and it can also guide short-term processing. For example, Becker and Hinton (1992) have shown how context can guide learning, and Hummel and Biederman (1992) have shown how it can guide processing in a large neural net for object recognition. This article studies the basic capabilities of a local processor with two distinct classes of inputs: receptive field inputs that provide the primary drive, and contextual inputs that modulate their effects. Contextual predictions are used to guide processing without being confused with receptive field inputs, so the processor's transfer function must distinguish these two roles. Given these two classes of input, the information in the output can be decomposed into four disjoint components, providing a space of possible goals in which the unsupervised learning of Linsker (1988) and the internally supervised learning of Becker and Hinton (1992) are special cases. Learning rules are derived from an information-theoretic objective function, and simulations show that a local processor trained with these rules and using an appropriate activation function has the elementary properties required.
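The modulatory role of context described above can be illustrated with a toy transfer function. This is a hypothetical choice for illustration, not necessarily the exact function derived in the article; it just exhibits the stated requirement that contextual input modulates but cannot drive the output.

```python
import numpy as np

def modulated_activation(r, c):
    """Toy transfer function for a local processor with two input classes.

    r -- summed receptive-field input (the primary drive)
    c -- summed contextual input (modulatory only)

    With r = 0 the output is 0 for any c, so context alone cannot drive
    the unit; agreeing context (r*c > 0) amplifies the drive, while
    conflicting context (r*c < 0) attenuates it.
    """
    return 0.5 * r * (1.0 + np.exp(2.0 * r * c))
```

With no context the drive passes through unchanged (A(r, 0) = r), which is how such a function keeps the two input roles distinct.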
Graded lines: using genetic variation in neuronal projections to understand functional organization
In this thesis, I investigated the interactions between brain regions in mice, with an emphasis on the motor system. Whilst brain areas are typically studied in isolation, they exist to act upon one another, and we lack even relatively basic principles of how brain areas interact. Here, my colleagues and I used new genetic techniques to investigate this topic. There were two major focuses. Firstly, I investigated how thalamic pathways vary, using genetic sequencing to produce a high-dimensional transcriptomic profile of forebrain communication pathways. This revealed a strikingly simple organisational system in rodent thalamus, with pathways serving diverse modalities varying in a common manner. Pathways serving systems as varied as vision, navigation, motor control and somatosensation showed a similar variation in gene expression, which was functionally relevant. This establishes a common reference frame for diverse modalities of cognition. Secondly, I began an ongoing investigation of the signals that motor cortex sends to subcortical motor structures. This revealed a separation of signals that project to the basal ganglia and cerebellum, opposite to existing predictions. Basal ganglia-projecting signals encoded motor execution, whilst cerebellar-projecting pathways encoded preparation- and reward-related information. These experiments motivated further investigation of these pathways, which is described within. Together, these results demonstrate the utility of focussing upon interactions between network nodes to reveal the contribution of individual nodes.
Finding The Beta For A Portfolio Isn't Obvious: An Educational Example
When a portfolio is not actively managed to maintain a fixed investment percentage in each asset but instead holds a fixed number of shares of each asset, the portfolio weights will change over time because the market returns of the different assets will not be the same. Consequently, portfolio betas computed as a linear combination of asset betas, which is the usual practice, will differ from betas computed by regressing portfolio returns on market returns, as is done when evaluating individual assets and mutual funds. The two approaches can yield quite different beta statistics and, consequently, inconsistent decisions depending on which method is used.
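The divergence described above can be sketched with simulated data: a buy-and-hold (fixed-shares) portfolio whose weights drift away from 50/50, so the weighted-average beta and the regression beta of realized portfolio returns disagree. All numbers here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120                                  # months
mkt = rng.normal(0.01, 0.04, n)          # simulated market returns
r1 = 0.5 * mkt + rng.normal(0, 0.01, n)  # asset with true beta 0.5
r2 = 1.5 * mkt + rng.normal(0, 0.01, n)  # asset with true beta 1.5

# Fixed shares: each position compounds separately, so weights drift.
v1 = 0.5 * np.cumprod(1 + r1)
v2 = 0.5 * np.cumprod(1 + r2)
w1 = np.empty(n)
w1[0] = 0.5                              # initial 50/50 dollar split
w1[1:] = v1[:-1] / (v1[:-1] + v2[:-1])   # weight entering each period
port = w1 * r1 + (1 - w1) * r2           # realized portfolio returns

def beta(r, m):
    cov = np.cov(r, m)
    return cov[0, 1] / cov[1, 1]

combo_beta = 0.5 * beta(r1, mkt) + 0.5 * beta(r2, mkt)  # the usual practice
regr_beta = beta(port, mkt)              # regression on portfolio returns
```

Because the higher-beta asset compounds faster, the drifting weights tilt the regression beta away from the fixed-weight linear combination.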
Why Downside Beta Is Better: An Educational Example
An educational example is presented that helps students understand the difference between the traditional CAPM beta and downside (or down-market) beta, and why downside beta is a superior measure for use in personal financial planning investment policy statements.
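A minimal sketch of the distinction, using simulated data (not the paper's example): an asset that falls harder in down markets than it rises in up markets. The full-sample CAPM beta averages the two regimes and hides the downside exposure; estimating the beta on down-market observations only reveals it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250
mkt = rng.normal(0.0, 0.01, n)           # simulated daily market returns
# Asymmetric exposure: beta 1.8 in down markets, 0.8 in up markets
asset = np.where(mkt < 0, 1.8, 0.8) * mkt + rng.normal(0, 0.002, n)

def ols_beta(r, m):
    cov = np.cov(r, m)
    return cov[0, 1] / cov[1, 1]

traditional_beta = ols_beta(asset, mkt)            # all observations
down = mkt < 0
downside_beta = ols_beta(asset[down], mkt[down])   # down-market days only
```

Here the traditional beta lands near the average of the two regimes (about 1.3), while the down-market beta recovers the loss sensitivity (about 1.8) that matters for an investor's downside risk.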
In memoriam: Jasper Daube, MD
Asset Attribution Stability And Portfolio Construction: An Educational Example
This paper illustrates how a third statistic from asset pricing models, the R-squared statistic, may contain information that can help in portfolio construction. Using a traditional CAPM model in comparison to an 18-factor Arbitrage Pricing Style Model, a portfolio separation test is conducted. Portfolio returns and risk metrics are compared using data on the Dow Jones 30 stocks over the period January 2007 through October 2013. Various teaching points are discussed and illustrated.
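The statistic in question can be sketched in a few lines: the R-squared of a single-factor CAPM regression measures how much of a stock's variance the factor explains, so a low value flags a large idiosyncratic component. The data below are simulated for illustration, not the Dow Jones 30 sample or the 18-factor model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 82                                   # months, Jan 2007 - Oct 2013
mkt = rng.normal(0.005, 0.045, n)        # simulated market factor

def capm_r_squared(asset, mkt):
    # OLS fit r = a + b*m, then R^2 = 1 - SSE/SST
    slope, intercept = np.polyfit(mkt, asset, 1)
    resid = asset - (intercept + slope * mkt)
    return 1.0 - resid.var() / asset.var()

market_driven = 1.1 * mkt + rng.normal(0, 0.01, n)   # little idiosyncratic risk
idiosyncratic = 1.1 * mkt + rng.normal(0, 0.09, n)   # large idiosyncratic risk
```

Both simulated stocks have the same beta, so beta alone cannot separate them; the R-squared does.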
Flight Investigation of the Effectiveness of an Automatic Aileron Trim Control Device for Personal Airplanes
A flight investigation has been conducted to determine the effectiveness of an automatic aileron trim control device installed in a personal airplane to augment the apparent spiral stability. The device uses a rate-gyro sensing element to switch an on-off type of control that operates the ailerons at a fixed rate through control centering springs. An analytical study using phase-plane and analog-computer methods was carried out to determine a desirable method of operation for the automatic trim control.
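The device's operating principle can be sketched with a toy phase-plane simulation: an unstable spiral mode opposed by an on-off servo that moves the ailerons at a fixed rate against the sensed turn direction. All constants here are hypothetical, chosen only to show bounded versus divergent behaviour, and are not taken from the report.

```python
LAM = 0.2      # hypothetical unstable spiral-mode root, 1/s
K = -2.0       # hypothetical aileron effectiveness on the spiral state
RATE = 0.5     # fixed aileron actuation rate, per second
LIMIT = 0.3    # maximum trim deflection (centering-spring limit)
DT = 0.01      # integration step, s

def peak_excursion(controlled, t_end=20.0, phi0=0.1):
    """Integrate phi' = LAM*phi + K*delta and return the peak |phi|."""
    phi, delta, peak = phi0, 0.0, abs(phi0)
    for _ in range(int(t_end / DT)):
        if controlled:
            # on-off logic: drive the aileron at a fixed rate against the
            # sensed direction of the turn, up to the deflection limit
            delta += RATE * DT * (1.0 if phi > 0 else -1.0)
            delta = max(-LIMIT, min(LIMIT, delta))
        phi += DT * (LAM * phi + K * delta)
        peak = max(peak, abs(phi))
    return peak
```

Without the device the spiral state diverges exponentially; with it, the bang-bang trim holds the excursion to a small limit cycle.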
Nonlinear computations in spiking neural networks through multiplicative synapses
The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While nonlinear computations can be implemented successfully in spiking neural networks, this requires supervised training and the resulting connectivity can be hard to interpret. In contrast, the required connectivity for any computation in the form of a linear dynamical system can be directly derived and understood with the spike coding network (SCN) framework. These networks also have biologically realistic activity patterns and are highly robust to cell death. Here we extend the SCN framework to directly implement any polynomial dynamical system, without the need for training. This results in networks requiring a mix of synapse types (fast, slow, and multiplicative), which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we demonstrate how to directly derive the required connectivity for several nonlinear dynamical systems. We also show how to carry out higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work demonstrates a novel method for implementing nonlinear computations in spiking neural networks, while keeping the attractive features of standard SCNs (robustness, realistic activity patterns, and interpretable connectivity). Finally, we discuss the biological plausibility of our approach, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
Comment: This article has been peer-reviewed and recommended by Peer Community In Neuroscience.
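The linear-system case that the abstract builds on can be sketched directly: for target dynamics x' = Ax, the standard SCN recipe writes the recurrent connectivity down from the decoding weights D with no training. The variable names and sizes below are our own illustrative choices, and this shows only the linear recipe, not the paper's multiplicative extension.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # target dynamics: a 2-D oscillator
lam = 1.0                              # leak rate of readout and membranes
N = 20                                 # number of spiking neurons
D = rng.normal(0.0, 0.1, size=(2, N))  # decoder: x_hat = D @ filtered_spikes

# Fast synapses cancel the instantaneous coding error; slow synapses
# implement the target dynamics A in the readout space.
Omega_fast = D.T @ D
Omega_slow = D.T @ (A + lam * np.eye(2)) @ D
```

The mSCNs of the paper add multiplicative synapses on top of this linear recipe so that polynomial terms in the dynamics can also be derived rather than trained.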